
# 17B parameter efficient inference

## Llama 4 Maverick 17B 128E Instruct 4bit
Publisher: mlx-community · License: Other
A 4-bit quantized version of the Llama-4-Maverick-17B-128E-Instruct model, adapted to the MLX framework as a multilingual instruction-following model.
Tags: Large Language Model, Transformers, Supports Multiple Languages
## Llama 4 Maverick 17B 16E Instruct 4bit
Publisher: mlx-community · License: Other
A 4-bit quantized model converted from meta-llama/Llama-4-Maverick-17B-128E-Instruct, supporting multilingual text generation tasks.
Tags: Large Language Model, Supports Multiple Languages
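Both entries are 4-bit MLX conversions published by mlx-community, so they are typically run locally through the mlx-lm package on Apple silicon. The sketch below follows the standard mlx_lm load/generate pattern; the exact repository id is an assumption inferred from the listing above and may differ on Hugging Face.

```python
# Minimal sketch: running a 4-bit MLX-quantized Llama 4 model with mlx-lm.
# Assumes `pip install mlx-lm` on an Apple silicon Mac; the repo id below is
# inferred from the listing above and is not confirmed by this page.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Llama-4-Maverick-17B-128E-Instruct-4bit")

# Wrap the request in the model's chat template for instruction-following use.
messages = [{"role": "user", "content": "Summarize the trade-offs of 4-bit quantization."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Generate a response; max_tokens bounds the output length.
text = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
print(text)
```

The 4-bit weights keep memory use low enough for local inference, which is the main point of these MLX conversions; generation quality otherwise tracks the original instruct model.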